    Learning Transformations for Clustering and Classification

    A low-rank transformation learning framework for subspace clustering and classification is proposed here. Many high-dimensional data, such as face images and motion sequences, approximately lie in a union of low-dimensional subspaces. The corresponding subspace clustering problem has been extensively studied in the literature to partition such high-dimensional data into clusters corresponding to their underlying low-dimensional subspaces. However, low-dimensional intrinsic structures are often violated for real-world observations, as they can be corrupted by errors or deviate from ideal models. We propose to address this by learning a linear transformation on subspaces using matrix rank, via its convex surrogate the nuclear norm, as the optimization criterion. The learned linear transformation restores a low-rank structure for data from the same subspace and, at the same time, forces a maximally separated structure for data from different subspaces. In this way, we reduce variations within subspaces and increase separation between subspaces for more robust subspace clustering. The proposed learned robust subspace clustering framework significantly enhances the performance of existing subspace clustering methods. Basic theoretical results presented here further support the underlying framework. To exploit the low-rank structures of the transformed subspaces, we further introduce a fast subspace clustering technique, which efficiently combines robust PCA with sparse modeling. When class labels are present at the training stage, we show that this low-rank transformation framework also significantly enhances classification performance. Extensive experiments using public datasets are presented, showing that the proposed approach significantly outperforms state-of-the-art methods for subspace clustering and classification. Comment: arXiv admin note: substantial text overlap with arXiv:1308.0273, arXiv:1308.027
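
    The core idea, learning a linear transformation T whose nuclear-norm objective shrinks the rank of within-subspace data while keeping different subspaces apart, can be pictured with a simple subgradient descent. The objective form, the Frobenius normalization of T, and all parameter names below are assumptions for illustration; the paper's exact formulation, constraints, and solver may differ.

```python
import numpy as np

def nuclear_norm_and_subgrad(A):
    """Nuclear norm of A and a subgradient of it (U @ Vt from the SVD of A)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return s.sum(), U @ Vt

def learn_low_rank_transform(X_by_class, out_dim, lam=1.0, lr=1e-2, iters=200, seed=0):
    """Hedged sketch: descend on  sum_c ||T X_c||_*  -  lam * ||T X||_*,
    which encourages each transformed class/subspace to be low rank while the
    transformed union of all data keeps its spread.
    X_by_class: list of (d, n_c) arrays (columns are samples); returns T (out_dim, d)."""
    rng = np.random.default_rng(seed)
    d = X_by_class[0].shape[0]
    T = rng.standard_normal((out_dim, d)) / np.sqrt(d)
    X_all = np.hstack(X_by_class)
    for _ in range(iters):
        grad = np.zeros_like(T)
        for Xc in X_by_class:
            _, G = nuclear_norm_and_subgrad(T @ Xc)
            grad += G @ Xc.T                      # d/dT ||T Xc||_*
        _, G_all = nuclear_norm_and_subgrad(T @ X_all)
        grad -= lam * G_all @ X_all.T             # d/dT ||T X||_*
        T -= lr * grad
        T /= np.linalg.norm(T)                    # crude normalization to rule out T -> 0
    return T
```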

    Coherent dynamics of domain formation in the Bose Ferromagnet

    We present a theory to describe domain formation observed very recently in a quenched Rb-87 gas, a typical ferromagnetic spinor Bose system. An overlap factor is introduced to characterize the symmetry breaking of the M_F=\pm 1 components for the F=1 ferromagnetic condensate. We demonstrate that the domain formation is a combined effect of quantum coherence and thermal relaxation. A thermally enhanced quantum oscillation is observed during the dynamical process of the domain formation, and the spatial separation of domains leads to a significant decay of the M_F=0 component fraction in an initial M_F=0 condensate. Comment: 4 pages, 3 figures

    Learning Transformations for Classification Forests

    This work introduces a transformation-based learner model for classification forests. The weak learner at each split node plays a crucial role in a classification tree. We propose to optimize the splitting objective by learning a linear transformation on subspaces using the nuclear norm as the optimization criterion. The learned linear transformation restores a low-rank structure for data from the same class and, at the same time, maximizes the separation between different classes, thereby improving the performance of the split function. Theoretical and experimental results support the proposed framework. Comment: arXiv admin note: text overlap with arXiv:1309.207
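
    At a split node, such a learned transform can then be used to score candidate splits, e.g. by how low-rank the per-class data in each child becomes. The axis-aligned test and the scoring rule below are illustrative assumptions, not the paper's exact split objective.

```python
import numpy as np

def nuclear_norm(A):
    """Sum of singular values (0 for an empty block)."""
    return np.linalg.svd(A, compute_uv=False).sum() if A.size else 0.0

def split_score(X, y, T, feature, threshold):
    """Toy score for a candidate split at a transformation-based split node:
    apply the learned transform T, then sum per-class nuclear norms inside each
    child; lower scores mean children closer to low-rank, class-pure structure.
    X is (d, n) with columns as samples, y is a length-n label vector."""
    mask = X[feature, :] <= threshold
    score = 0.0
    for child in (mask, ~mask):
        for c in np.unique(y[child]):
            score += nuclear_norm(T @ X[:, child & (y == c)])
    return score
```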

    The Role of Principal Angles in Subspace Classification

    Subspace models play an important role in a wide range of signal processing tasks, and this paper explores how the pairwise geometry of subspaces influences the probability of misclassification. When the mismatch between the signal and the model is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. The transform presented here (TRAIT) preserves specific characteristics of each individual class, and this approach is shown to be complementary to a previously developed transform (LRT) that enlarges inter-class distance while suppressing intra-class dispersion. The theoretical results are supported by demonstrations of superior classification accuracy on synthetic and measured data, even in the presence of significant model mismatch.
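
    The principal angles themselves are easy to compute from orthonormal bases of the two subspaces; the snippet below also forms the two angle-based quantities the abstract ties to the small- and large-mismatch regimes. The random bases are only a placeholder example.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between span(A) and span(B): orthonormalize each basis
    with QR, then the singular values of Qa.T @ Qb are the cosines of the angles."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    cosines = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), 0.0, 1.0)
    return np.arccos(cosines)

rng = np.random.default_rng(0)
A, B = rng.standard_normal((10, 3)), rng.standard_normal((10, 3))   # two 3-dim subspaces of R^10
theta = principal_angles(A, B)
product_of_sines = np.prod(np.sin(theta))         # governs error when model mismatch is vanishingly small
sum_of_squared_sines = np.sum(np.sin(theta)**2)   # governs error when mismatch is more significant
```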

    LaneNet: Real-Time Lane Detection Networks for Autonomous Driving

    Lane detection aims to detect lanes on the road and provide the accurate location and shape of each lane. It serves as one of the key techniques enabling modern assisted and autonomous driving systems. However, several unique properties of lanes challenge detection methods. The lack of distinctive features means lane detection algorithms tend to be confused by other objects with a similar local appearance. Moreover, the inconsistent number of lanes on a road, as well as diverse lane line patterns, e.g. solid, broken, single, double, merging, and splitting lines, further hampers performance. In this paper, we propose a deep neural network based method, named LaneNet, that breaks down lane detection into two stages: lane edge proposal and lane line localization. Stage one uses a lane edge proposal network for pixel-wise lane edge classification, and the lane line localization network in stage two then detects lane lines based on the lane edge proposals. Note that LaneNet is designed to detect lane lines only, which makes it more difficult to suppress false detections on similar lane markings on the road such as arrows and characters. Despite these difficulties, our method is shown to be robust in both highway and urban road scenarios, without relying on any assumptions about the number of lanes or the lane line patterns. The high running speed and low computational cost endow our LaneNet with the capability of being deployed on vehicle-based systems. Experiments validate that our LaneNet consistently delivers outstanding performance on real-world traffic scenarios.
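
    The two-stage split described above can be pictured as two small networks chained together: a pixel-wise lane-edge classifier feeding a lane-line localizer. The layer sizes, the output parameterization, and everything else in this sketch are assumptions; the actual LaneNet architecture and losses are given in the paper.

```python
import torch
import torch.nn as nn

class LaneEdgeProposal(nn.Module):
    """Stage one (sketch): pixel-wise lane-edge classification."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),                      # one lane-edge logit per pixel
        )

    def forward(self, img):
        return self.net(img)

class LaneLineLocalization(nn.Module):
    """Stage two (sketch): predict lane-line parameters from the edge proposals."""
    def __init__(self, n_params=4):                   # e.g. polynomial coefficients of a line (assumed)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_params),
        )

    def forward(self, edge_logits):
        return self.net(torch.sigmoid(edge_logits))

edges = LaneEdgeProposal()(torch.randn(1, 3, 256, 512))   # stage one: edge map
lines = LaneLineLocalization()(edges)                      # stage two: line parameters
```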

    Sparse Dictionary-based Attributes for Action Recognition and Summarization

    We present an approach for dictionary learning of action attributes via information maximization. We unify the class distribution and appearance information into an objective function for learning a sparse dictionary of action attributes. The objective function maximizes the mutual information between what has been learned and what remains to be learned, in terms of appearance information and class distribution, for each dictionary atom. We propose a Gaussian Process (GP) model for sparse representation to optimize the dictionary objective function. The sparse coding property allows a kernel with compact support in the GP to realize a very efficient dictionary learning process. Hence we can describe an action video by a set of compact and discriminative action attributes. More importantly, we can recognize modeled action categories in a sparse feature space, which can be generalized to unseen and unmodeled action categories. Experimental results demonstrate the effectiveness of our approach in action recognition and summarization.
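
    One generic way to read the "mutual information between what has been learned and what remains to be learned" criterion under a GP model is a greedy, kernel-based selection of atoms. The sketch below uses the standard Gaussian mutual-information gain (a ratio of conditional variances) and omits the class-distribution and sparse-coding terms, so it illustrates only the selection loop, not the paper's actual objective.

```python
import numpy as np

def cond_var(K, j, idx):
    """Conditional variance of candidate j given the index set idx under a GP prior
    with kernel matrix K."""
    if not idx:
        return K[j, j]
    Kss = K[np.ix_(idx, idx)]
    Kjs = K[j, idx]
    return K[j, j] - Kjs @ np.linalg.solve(Kss, Kjs)

def greedy_mi_selection(K, k):
    """Greedily pick k dictionary atoms from a candidate pool, each time choosing the
    one with the largest Gaussian mutual-information gain between the selected set
    and the remaining candidates (variance-ratio form)."""
    selected, rest = [], list(range(K.shape[0]))
    for _ in range(k):
        gains = {j: cond_var(K, j, selected) /
                    max(cond_var(K, j, [i for i in rest if i != j]), 1e-12)
                 for j in rest}
        best = max(gains, key=gains.get)
        selected.append(best)
        rest.remove(best)
    return selected
```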

    Magnetic field-line lengths inside interplanetary magnetic flux ropes

    We report on a detailed and systematic study of field-line twist and length distributions within magnetic flux ropes embedded in Interplanetary Coronal Mass Ejections (ICMEs). The Grad-Shafranov reconstruction method is utilized together with a constant-twist nonlinear force-free (Gold-Hoyle) flux rope model to reveal the close relation between field-line twist and length in cylindrical flux ropes, based on in-situ Wind spacecraft measurements. We show that the field-line twist distributions within interplanetary flux ropes are inconsistent with the Lundquist model. In particular, we utilize the unique measurements of magnetic field-line lengths within selected ICME events provided by Kahler et al. (2011), based on energetic electron burst observations at 1 AU and the associated type III radio emissions detected by the Wind spacecraft. These direct measurements are compared with our model calculations to help assess the flux-rope interpretation of the embedded magnetic structures. By using the different flux-rope models, we show that the in-situ direct measurements of field-line lengths are consistent with a flux-rope structure with spiral field lines of constant and low twist, largely different from that of the Lundquist model, especially for relatively large-scale flux ropes. Comment: submitted to JGR Special Section: VarSIT
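
    For reference, the two axisymmetric flux-rope fields being contrasted are commonly written as follows (standard forms from the literature, quoted as background rather than from the paper's text); the Gold-Hoyle rope has the constant twist and simple field-line length that the comparison relies on.

```latex
% Lundquist (linear force-free) field in cylindrical coordinates (r, \phi, z):
\[
  B_r = 0, \qquad B_\phi = B_0\, J_1(\alpha r), \qquad B_z = B_0\, J_0(\alpha r).
\]
% Gold-Hoyle (uniform-twist, nonlinear force-free) field:
\[
  B_r = 0, \qquad B_\phi = \frac{B_0\, T r}{1 + T^2 r^2}, \qquad B_z = \frac{B_0}{1 + T^2 r^2},
\]
% so every Gold-Hoyle field line satisfies d\phi/dz = T (constant twist), and a line at
% radius r threading an axial length L_z has total length
\[
  L = L_z \sqrt{1 + T^2 r^2}.
\]
```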

    The distributional hyper-Jacobian determinants in fractional Sobolev spaces

    In this paper we give a positive answer to a question raised by Baer-Jerison in connection with hyper-Jacobian determinants and associated minors in fractional Sobolev spaces. Inspired by recent works of Brezis-Nguyen and Baer-Jerison on the Jacobian and Hessian determinants, we show that the distributional m-th Jacobian minors of degree r are weakly continuous in the fractional Sobolev spaces W^{m-\frac{m}{r},r}, and the result is optimal, satisfying the necessary conditions, in the framework of fractional Sobolev spaces. In particular, the conditions can be removed in the case m=1,2, i.e., the m-th Jacobian minors of degree r are well defined in W^{s,p} if and only if W^{s,p} \subseteq W^{m-\frac{m}{r},m}, in the case m=1,2. Comment: 19 pages
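
    To fix ideas in the simplest case (m = 1, full minors in dimension r = 2), the distributional Jacobian is the classical object obtained by moving one derivative onto the test function; the identity and definition below are standard background, not taken from this paper.

```latex
% For smooth u = (u^1, u^2) : R^2 -> R^2 one has the divergence-form identity
\[
  \det Du \;=\; \partial_1\!\left(u^1\,\partial_2 u^2\right) \;-\; \partial_2\!\left(u^1\,\partial_1 u^2\right),
\]
% which motivates defining the distributional Jacobian against test functions
% \varphi \in C_c^\infty(R^2) by
\[
  \langle \mathrm{Det}\, Du, \varphi \rangle \;:=\;
  -\int_{\mathbb{R}^2} u^1 \left( \partial_2 u^2\, \partial_1 \varphi - \partial_1 u^2\, \partial_2 \varphi \right) dx ,
\]
% the starting point for extending Det Du by density to the fractional scale
% W^{m - m/r, r} of the abstract (here W^{1/2,2} for m = 1, r = 2).
```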

    Virtual CNN Branching: Efficient Feature Ensemble for Person Re-Identification

    In this paper we introduce an ensemble method for convolutional neural networks (CNNs), called "virtual branching," which can be implemented with nearly no additional parameters and computation on top of standard CNNs. We propose our method in the context of person re-identification (re-ID). Our CNN model consists of shared bottom layers, followed by "virtual" branches, where neurons from a block of regular convolutional and fully-connected layers are partitioned into multiple sets. Each virtual branch is trained with different data to specialize in different aspects, e.g., a specific body region or pose orientation. In this way, robust ensemble representations are obtained against human body misalignment, deformations, or variations in viewing angles, at nearly no additional cost. The proposed method achieves competitive performance on multiple person re-ID benchmark datasets, including Market-1501, CUHK03, and DukeMTMC-reID.
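
    A minimal way to picture the "virtual branches" is to partition the channels of a shared feature block and give each channel group its own small head, so the ensemble adds almost no parameters. The channel partitioning, pooling, and per-branch linear classifiers below are assumptions for illustration, not the paper's exact layout or training scheme.

```python
import torch
import torch.nn as nn

class VirtualBranchHead(nn.Module):
    """Sketch: split the channels of a shared feature map into n_branches groups
    ("virtual" branches) and attach a lightweight classifier to each group."""
    def __init__(self, in_channels=512, n_branches=4, n_ids=751):  # 751 = Market-1501 training identities
        super().__init__()
        assert in_channels % n_branches == 0
        self.chunk = in_channels // n_branches
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.heads = nn.ModuleList(nn.Linear(self.chunk, n_ids) for _ in range(n_branches))

    def forward(self, feat):                       # feat: (B, C, H, W) from the shared bottom layers
        pooled = self.pool(feat).flatten(1)        # (B, C)
        groups = pooled.split(self.chunk, dim=1)   # one channel group per virtual branch
        return [head(g) for head, g in zip(self.heads, groups)]

logits_per_branch = VirtualBranchHead()(torch.randn(8, 512, 16, 8))
```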

    Random Forests Can Hash

    Hash codes are a very efficient data representation, needed to cope with the ever-growing amounts of data. We introduce a random forest semantic hashing scheme with information-theoretic code aggregation, showing for the first time how random forests, a technique that together with deep learning has shown spectacular results in classification, can also be extended to large-scale retrieval. A traditional random forest fails to enforce the consistency of hashes generated from each tree for the same class data, i.e., to preserve the underlying similarity, and it also lacks a principled way to aggregate codes across trees. We start with a simple hashing scheme, where independently trained random trees in a forest act as hashing functions. We then propose a subspace model as the splitting function, and show that it enforces hash consistency in a tree for data from the same class. We also introduce an information-theoretic approach for aggregating codes of individual trees into a single hash code, producing a near-optimal unique hash for each class. Experiments on large-scale public datasets are presented, showing that the proposed approach significantly outperforms state-of-the-art hashing methods for retrieval tasks.
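
    The "simple hashing scheme" the abstract starts from, independently trained trees acting as hashing functions, can be prototyped in a few lines: each sample is mapped to its leaf index in every tree, and the indices are bit-encoded and concatenated. The leaf-index encoding and all parameters are assumptions, and this sketch includes neither the subspace split function nor the information-theoretic code aggregation the paper adds on top.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def forest_hash(forest, X, bits_per_tree=8):
    """Baseline sketch: encode, for every tree, the index of the leaf each sample
    reaches as bits_per_tree binary digits, and concatenate the codes across trees."""
    leaves = forest.apply(X)                          # (n_samples, n_trees) leaf indices
    n_samples, n_trees = leaves.shape
    codes = np.zeros((n_samples, n_trees * bits_per_tree), dtype=np.uint8)
    for t in range(n_trees):
        for b in range(bits_per_tree):
            codes[:, t * bits_per_tree + b] = (leaves[:, t] >> b) & 1
    return codes

X = np.random.rand(100, 32)
y = np.random.randint(0, 5, size=100)
forest = RandomForestClassifier(n_estimators=4, max_depth=6, random_state=0).fit(X, y)
hash_codes = forest_hash(forest, X)                   # (100, 32) binary codes
```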